57 research outputs found
Proving Weak Approximability Without Algorithms
A boolean predicate is said to be strongly approximation resistant if, given a near-satisfiable instance of its maximum constraint satisfaction problem, it is hard to find an assignment such that the fraction of constraints satisfied deviates significantly from the expected fraction of constraints satisfied by a random assignment. A predicate which is not strongly approximation resistant is known as weakly approximable.
We give a new method for proving the weak approximability of predicates, using a simple SDP relaxation, without designing and analyzing new rounding algorithms for each predicate. Instead, we use the recent characterization of strong approximation resistance by Khot et al. [STOC 2014], and show how to prove that, for a given predicate, certain necessary conditions for strong resistance derived from their characterization are violated. By their result, this implies the existence of a good rounding algorithm, proving weak approximability.
We show how this method can be used to obtain simple proofs of (weak approximability analogues of) various known results on approximability, as well as new results on weak approximability of symmetric predicates.
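The baseline in the definition above, the expected fraction of constraints satisfied by a uniformly random assignment, is simply the acceptance probability of the predicate on a uniform input. A minimal sketch (illustrative function names, not from the paper):

```python
from itertools import product

def random_assignment_value(predicate, k):
    """Expected fraction of constraints a uniformly random assignment
    satisfies: the fraction of the 2^k inputs on which the k-ary
    Boolean predicate accepts."""
    inputs = list(product([0, 1], repeat=k))
    return sum(predicate(x) for x in inputs) / len(inputs)

# 3-ary OR (Max-3-SAT clauses): 7 of the 8 inputs accept.
print(random_assignment_value(lambda x: int(any(x)), 3))   # 0.875

# 3-ary XOR (Max-3-LIN): exactly half of the inputs accept.
print(random_assignment_value(lambda x: sum(x) % 2, 3))    # 0.5
```

A predicate is strongly approximation resistant when beating this baseline by any constant margin is hard even on near-satisfiable instances.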
From Weak to Strong LP Gaps for All CSPs
We study the approximability of constraint satisfaction problems (CSPs) by linear programming (LP) relaxations. We show that for every CSP, the approximation obtained by a basic LP relaxation is no weaker than the approximation obtained using relaxations given by Omega(log(n)/log(log(n))) levels of the Sherali-Adams hierarchy on instances of size n.
It was proved by Chan et al. [FOCS 2013] (and recently strengthened by Kothari et al. [STOC 2017]) that for CSPs, any polynomial size LP extended formulation is no stronger than relaxations obtained by a super-constant number of levels of the Sherali-Adams hierarchy. Combining this with our result also implies that any polynomial size LP extended formulation is no stronger than simply the basic LP, which can be thought of as the base level of the Sherali-Adams hierarchy. This essentially gives a dichotomy result for approximation of CSPs by polynomial size LP extended formulations.
Using our techniques, we also simplify and strengthen the result by Khot et al. [STOC 2014] on (strong) approximation resistance for LPs. They provided a necessary and sufficient condition under which Omega(log(log(n))) levels of the Sherali-Adams hierarchy cannot achieve an approximation better than a random assignment. We simplify their proof and strengthen the bound to Omega(log(n)/log(log(n))) levels.
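The "basic LP" relaxation referred to above has a marginal distribution for each variable, a local distribution for each constraint, and consistency constraints tying them together. As an illustrative sketch (a toy instance, not from the paper), here it is for Max-CUT, i.e. "not-equal" constraints, on a triangle, where the LP value is 1 even though no assignment cuts more than 2/3 of the edges:

```python
import numpy as np
from scipy.optimize import linprog

# Triangle graph; Max-CUT as a Boolean CSP with "not-equal" constraints.
edges = [(0, 1), (1, 2), (0, 2)]
nv, ne = 3, len(edges)
NV = 2 * nv            # mu_v(a) variables, one per (vertex, value)
N = NV + 4 * ne        # plus mu_e(a, b) variables, one per (edge, pair)

def vvar(v, a): return 2 * v + a
def evar(e, a, b): return NV + 4 * e + 2 * a + b

A_eq, b_eq = [], []
# Each vertex marginal is a probability distribution.
for v in range(nv):
    row = np.zeros(N); row[vvar(v, 0)] = row[vvar(v, 1)] = 1
    A_eq.append(row); b_eq.append(1)
# Each edge distribution must agree with both vertex marginals.
for e, (u, v) in enumerate(edges):
    for a in (0, 1):
        row = np.zeros(N)
        row[evar(e, a, 0)] = row[evar(e, a, 1)] = 1
        row[vvar(u, a)] = -1
        A_eq.append(row); b_eq.append(0)
    for b in (0, 1):
        row = np.zeros(N)
        row[evar(e, 0, b)] = row[evar(e, 1, b)] = 1
        row[vvar(v, b)] = -1
        A_eq.append(row); b_eq.append(0)

# Maximize the average local probability that an edge is cut (a != b).
c = np.zeros(N)
for e in range(ne):
    c[evar(e, 0, 1)] = c[evar(e, 1, 0)] = -1 / ne

res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0, 1)] * N)
print(round(-res.fun, 6))  # LP value 1.0; the true Max-CUT value is 2/3
```

Setting every vertex marginal to uniform and every edge distribution to half on (0, 1) and half on (1, 0) is feasible and achieves value 1, which is what the solver finds; the result above says that, up to the stated number of Sherali-Adams levels, this simple relaxation already captures the power of much larger LPs.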
Ellipsoid fitting up to constant via empirical covariance estimation
The ellipsoid fitting conjecture of Saunderson, Chandrasekaran, Parrilo and
Willsky considers the maximum number n of random Gaussian points in R^d
such that, with high probability, there exists an origin-symmetric ellipsoid
passing through all the points. They conjectured a threshold of n = d^2/4,
while until recently, known lower bounds on the maximum possible n were of
the form d^2/polylog(d). We give a simple proof, based on concentration of
sample covariance matrices, that with probability 1 - o(1) it is possible
to fit an ellipsoid through d^2/C random Gaussian points, for an absolute
constant C. Similar results were also obtained in two recent independent
works by Hsieh, Kothari, Potechin and Xu [arXiv, July 2023] and by
Bandeira, Maillard, Mendelson, and Paquette [arXiv, July 2023].
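As a numerical illustration (a minimum-norm least-squares heuristic, not the paper's covariance-concentration argument): the constraint that the ellipsoid { x : x^T A x = 1 } passes through a point x_i is linear in the entries of the symmetric matrix A, so one can solve the resulting underdetermined linear system directly and inspect the solution:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 40, 60               # conjectured threshold is about d^2/4 = 400
X = rng.standard_normal((n, d))

# Each fitting constraint x_i^T A x_i = 1 is linear in vec(A): build
# the n x d^2 system whose rows are vec(x_i x_i^T).
M = np.stack([np.outer(x, x).ravel() for x in X])

# Minimum-norm solution of the underdetermined system M vec(A) = 1.
vecA, *_ = np.linalg.lstsq(M, np.ones(n), rcond=None)
A = (vecA.reshape(d, d) + vecA.reshape(d, d).T) / 2  # symmetrize

fit_error = np.abs(np.einsum('ij,jk,ik->i', X, A, X) - 1).max()
min_eig = np.linalg.eigvalsh(A).min()  # > 0 means a genuine ellipsoid
print(fit_error, min_eig)
```

The linear system is satisfied to machine precision; for n well below the threshold the recovered A is typically positive definite as well (the printed minimum eigenvalue), i.e. a genuine ellipsoid through all n points.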
On the Optimality of a Class of LP-based Algorithms
In this paper we will be concerned with a class of packing and covering
problems which includes Vertex Cover and Independent Set. Typically, one can
write an LP relaxation and then round the solution. In this paper, we explain
why the simple LP-based rounding algorithm for the Vertex Cover problem is
optimal assuming the Unique Games Conjecture (UGC). Complementing
Raghavendra's result, our result generalizes to a class of strict,
covering/packing type CSPs.
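For Vertex Cover, the simple LP-based rounding algorithm in question is the classical half-integrality rounding: solve the LP relaxation and take every vertex whose LP value is at least 1/2. A self-contained sketch on a small hypothetical graph:

```python
import numpy as np
from scipy.optimize import linprog

# Vertex Cover LP: minimize sum_v x_v subject to x_u + x_v >= 1 for
# each edge and 0 <= x_v <= 1.  Rounding up every x_v >= 1/2 yields a
# cover of size at most twice the LP optimum: a 2-approximation.
edges = [(0, 1), (0, 2), (0, 3), (1, 2)]   # small example graph
nv = 4

A_ub = np.zeros((len(edges), nv))
for i, (u, v) in enumerate(edges):
    A_ub[i, u] = A_ub[i, v] = -1           # -(x_u + x_v) <= -1
res = linprog(np.ones(nv), A_ub=A_ub, b_ub=-np.ones(len(edges)),
              bounds=[(0, 1)] * nv)

cover = {v for v in range(nv) if res.x[v] >= 0.5 - 1e-9}
covered = all(u in cover or v in cover for u, v in edges)
print(sorted(cover), covered)   # the rounded set covers every edge
```

Since each edge constraint forces max(x_u, x_v) >= 1/2, the rounded set is always a valid cover, and its size is at most 2 * sum_v x_v; the UGC-based optimality result says that no polynomial-time algorithm can beat this factor of 2.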
- …